4 research outputs found

    Of Humans, Machines, and Extremism: The Role of Platforms in Facilitating Undemocratic Cognition

    The events surrounding the 2020 U.S. election and the January 6 insurrection have challenged scholarly understanding of concepts like collective action, radicalization, and mobilization. In this article, we argue that online far-right radicalization is better understood as a form of distributed cognition, in which groups’ online environments incentivize certain patterns of behavior over others. Namely, these platforms organize their users in ways that facilitate a nefarious form of collective intelligence, which is amplified and strengthened by systems of algorithmic curation. In short, these platforms reflect and facilitate undemocratic cognition, fueled by affective networks, contributing to events like the January 6 insurrection and to far-right extremism more broadly. To demonstrate, we apply this framework to a case study, the “Stop the Steal” movement, illustrating how it can make sense of radicalization and mobilization influenced by undemocratic cognition.

    Safe from “harm”: The Governance of Violence by Platforms

    A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies regarding what constitutes “hate speech” and “harmful content,” they often rely on subjective judgments of harm that pertain specifically to spectacular, physical violence, even though harm takes many shapes and complex forms. The politics of defining “harm” and “violence” on these platforms are complex and dynamic, and represent entrenched histories of how control over these definitions extends to people's perceptions of them. Via a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that the platforms' narrow definitions of harm and violence are not just insufficient but result in these platforms engaging in a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions of not just what violence is and how it manifests, but whom it impacts. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm. We provide a number of suggestions, namely a restorative justice-focused approach, for addressing platform harm.